automation bias

Terms from Artificial Intelligence: humans at the heart of algorithms

Automation bias is the tendency for humans to agree with computer advice, or be swayed by computational data, even when the automated system is wrong. This can be problematic when the system fails, is error prone, or doesn't take into account all aspects of a situation. For example, an AI system may use a small number of features to advise on whether someone is at high or low risk when they enter a hospital emergency department; if the person is obviously not breathing, the paramedic or nurse should clearly ignore this advice, but there can be a tendency to assume the computer knows best. This can also lead to bias: for example, the {COMPAS} system, which has been used by judges as part of parole decisions in the US, has been criticised as embodying elements of racial bias. Ultimately the judge makes the decision, but may be swayed by the system's assessment.

Used on pages 434, 555, 561